This paper investigates the problem of Named Entity Recognition (NER) for extremely low-resource languages with only a few hundred tagged data samples. NER is a fundamental task in Natural Language Processing (NLP). A critical driver accelerating progress in NER is the existence of large-scale language corpora, which enable NER systems to achieve outstanding performance in languages such as English and French with abundant training data. However, NER for low-resource languages remains relatively unexplored. In this paper, we introduce Mask Augmented Named Entity Recognition (MANER), a new methodology that leverages the distributional hypothesis of pre-trained masked language models (MLMs) for NER. The <mask> token in pre-trained MLMs encodes valuable semantic contextual information. MANER re-purposes the <mask> token for NER prediction: specifically, we prepend the <mask> token to every word in a sentence for which we would like to predict the named entity tag. During training, we jointly fine-tune the MLM and a new NER prediction head attached to each <mask> token. We demonstrate that MANER is well-suited for NER in low-resource languages; our experiments show that, across 100 languages with as few as 100 training examples each, it improves on state-of-the-art methods by up to 48% and by 12% on average in F1 score. We also perform detailed analyses and ablation studies to understand the scenarios best suited to MANER.
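The input construction can be illustrated concretely; the following is a minimal sketch assuming a HuggingFace-style tokenizer and encoder, with an illustrative linear tag head rather than the authors' exact prediction head.

```python
# Minimal sketch of MANER-style input construction: a <mask> token is
# prepended before each word, and an NER head reads the hidden state at
# each <mask> position. Model name and tag count are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
ner_head = torch.nn.Linear(encoder.config.hidden_size, 9)  # e.g., 9 BIO tags

words = ["Marie", "Curie", "was", "born", "in", "Warsaw"]
# Prepend the mask token to every word we want to tag.
augmented = " ".join(f"{tokenizer.mask_token} {w}" for w in words)
inputs = tokenizer(augmented, return_tensors="pt")

hidden = encoder(**inputs).last_hidden_state            # (1, seq_len, d)
mask_positions = inputs["input_ids"] == tokenizer.mask_token_id
logits = ner_head(hidden[mask_positions])               # one row per word
tags = logits.argmax(-1)                                # predicted tag ids
```

During training, the encoder and `ner_head` would be fine-tuned jointly, as the abstract describes.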
Foveated imaging provides a better tradeoff between situational awareness (field of view) and resolution, and is critical in long-wavelength infrared regimes because of the size, weight, power, and cost of thermal sensors. We demonstrate computational foveated imaging by exploiting the ability of a meta-optical frontend to discriminate between different polarization states and a computational backend to reconstruct the captured image/video. The frontend is a three-element optic: the first element, which we call the "foveal" element, is a metalens that focuses s-polarized light at a distance of $f_1$ without affecting the p-polarized light; the second element, which we call the "perifoveal" element, is another metalens that focuses p-polarized light at a distance of $f_2$ without affecting the s-polarized light. The third element is a freely rotating polarizer that dynamically changes the mixing ratio between the two polarization states. Both the foveal element (focal length = 150mm; diameter = 75mm) and the perifoveal element (focal length = 25mm; diameter = 25mm) were fabricated as polarization-sensitive, all-silicon metasurfaces, resulting in a large-aperture, 1:6 foveal-expansion, thermal imaging capability. A computational backend then utilizes a deep image prior to separate the resultant multiplexed image or video into a foveated image consisting of a high-resolution center and a lower-resolution, large-field-of-view context. We build a first-of-its-kind prototype system and demonstrate real-time (12 frames per second) thermal foveated image and video capture in the wild.
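The polarization-multiplexed capture can be pictured with a simplified forward model; the Malus's-law mixing below is our own idealization, not the authors' calibrated optical model.

```python
# Illustrative numpy sketch (our simplification) of the multiplexed image
# formation: a rotating polarizer at angle theta mixes the s-polarized
# "foveal" image and the p-polarized "perifoveal" image with
# Malus's-law weights before the sensor.
import numpy as np

def multiplexed_capture(foveal_img, perifoveal_img, theta):
    """Sensor image for polarizer angle theta (radians)."""
    w_s = np.cos(theta) ** 2           # transmission of s-polarized light
    w_p = np.sin(theta) ** 2           # transmission of p-polarized light
    return w_s * foveal_img + w_p * perifoveal_img

foveal = np.random.rand(256, 256)      # high-resolution, narrow-FoV channel
perifoveal = np.random.rand(256, 256)  # low-resolution, wide-FoV channel
frames = [multiplexed_capture(foveal, perifoveal, t)
          for t in np.linspace(0, np.pi / 2, 8)]  # sweep of mixing ratios
```

The computational backend's job is the inverse problem: recovering the two channels from such mixed frames.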
A critically important, ubiquitous, and yet poorly understood ingredient in modern deep networks (DNs) is batch normalization (BN), which centers and normalizes the feature maps. To date, only limited progress has been made in understanding why BN boosts DN learning and inference performance; existing work has focused on showing that BN smooths the DN's loss landscape. In this paper, we study BN theoretically from the perspective of function approximation. We exploit the fact that today's state-of-the-art DNs are continuous piecewise affine (CPA) functions that fit the training data via affine mappings defined over a partition of the input space (the so-called "linear regions"). We demonstrate that BN is an unsupervised learning technique that, independent of the DN's weights or gradient-based learning, adapts the geometry of the DN's spline partition to match the data. BN thus provides a "smart initialization" that boosts the performance of DN learning, because it adapts even a DN initialized with random weights to align its spline partition with the data. We also show that the variation of BN statistics between mini-batches introduces a dropout-like random perturbation to the partition boundaries, and hence to the decision boundary in classification problems. This per-mini-batch perturbation reduces overfitting and improves generalization by increasing the margin between the training samples and the decision boundary.
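The label-free partition adaptation can be seen in a toy experiment; the setup below is our own illustration of the claim, not an experiment from the paper.

```python
# Toy numpy illustration: for a ReLU unit with pre-activation z = x @ w,
# BN's centering moves the unit's partition boundary {x : x @ w = mean(z)}
# into the bulk of the data, independent of any labels or gradient steps.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=1.0, size=(1000, 2))   # data far from origin
W = rng.normal(size=(2, 8))                          # random layer weights

Z = X @ W                                            # pre-activations
frac_active_raw = (Z > 0).mean(axis=0)               # boundaries miss the data
Z_bn = (Z - Z.mean(axis=0)) / Z.std(axis=0)          # batch normalization
frac_active_bn = (Z_bn > 0).mean(axis=0)             # boundaries split the data

print(frac_active_raw)  # typically near 0.0 or 1.0: most units saturated
print(frac_active_bn)   # near 0.5: every unit's boundary crosses the data
```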
Transformers have achieved remarkable success in sequence modeling and beyond, but they suffer from quadratic computational and memory complexity with respect to the length of the input sequence. Efficient transformers, leveraging techniques that include sparse and linear attention and hashing tricks, have been proposed to reduce this quadratic complexity, but they significantly degrade accuracy. In response, we first interpret the linear attention that computes the attention map and the residual connections as gradient descent steps. We then introduce momentum into these components and propose the momentum transformer, which uses momentum to improve the accuracy of linear transformers while maintaining linear memory and computational complexity. Furthermore, we develop an adaptive strategy that computes the model's momentum value based on the optimal momentum for quadratic optimization. This adaptive momentum eliminates the need to search for the best momentum value and further enhances the performance of the momentum transformer. A range of experiments on autoregressive and non-autoregressive tasks, including image generation and machine translation, demonstrates that the momentum transformer outperforms popular linear transformers in both training efficiency and accuracy.
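The core idea can be sketched as causal linear attention whose running-state update carries a heavy-ball momentum term; the feature map and the fixed `beta` below are our illustrative choices, not the paper's exact formulation.

```python
# Hedged sketch: causal linear attention with a momentum (heavy-ball)
# term on the state recurrence, following the gradient-descent view
# described in the abstract.
import torch

def elu_feature(x):
    return torch.nn.functional.elu(x) + 1.0   # common positive feature map

def momentum_linear_attention(q, k, v, beta=0.9):
    """q, k, v: (seq_len, dim). Returns (seq_len, dim) causal outputs."""
    L, d = q.shape
    state = torch.zeros(d, d)       # running sum of phi(k_t) v_t^T
    mom = torch.zeros(d, d)         # momentum buffer for the state updates
    norm = torch.zeros(d)           # running sum of phi(k_t) for normalization
    out = torch.zeros_like(v)
    for t in range(L):
        phi_k = elu_feature(k[t])
        mom = beta * mom + torch.outer(phi_k, v[t])   # heavy-ball step
        state = state + mom
        norm = norm + phi_k
        phi_q = elu_feature(q[t])
        out[t] = (phi_q @ state) / (phi_q @ norm + 1e-6)
    return out
```

The paper's adaptive variant would set `beta` from the optimal momentum of an associated quadratic problem instead of fixing it by hand.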
We develop new theoretical results on matrix perturbation to shed light on the impact of architecture on the performance of a deep network. In particular, we explain analytically what deep learning practitioners have long observed empirically: the parameters of some deep architectures (e.g., residual networks (ResNets) and densely connected networks (DenseNets)) are easier to optimize than others (e.g., convolutional networks (ConvNets)). Building on our earlier work connecting deep networks with continuous piecewise-affine splines, we develop an exact local linear representation of a deep network layer for a family of modern deep networks that includes ConvNets at one end of a spectrum and ResNets, DenseNets, and other networks with skip connections at the other. For regression and classification tasks that optimize the squared-error loss, we show that the optimization loss surface of a modern deep network is piecewise quadratic in the parameters, with local shape governed by the singular values of a matrix that is a function of the local linear representation. We develop new perturbation results for how the singular values of matrices of this sort behave as we add a fraction of the identity and multiply by certain diagonal matrices. A direct application of our perturbation results explains analytically why a network with skip connections (such as a ResNet or DenseNet) is easier to optimize than a ConvNet: thanks to its more stable singular values and smaller condition number, the local loss surface of such a network is less erratic, less eccentric, and features local minima that are more accommodating to gradient-based optimization. Our results also shed new light on the impact of different nonlinear activation functions on a deep network's singular values, regardless of its architecture.
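The stabilizing effect of adding a fraction of the identity can be checked numerically; the experiment below is our own illustration of the claim, not the paper's proof.

```python
# Quick numpy check: adding the identity to a layer's local linear
# operator -- the effect of a skip connection -- tends to stabilize its
# singular values and shrink its condition number.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 64)) / np.sqrt(64)   # ConvNet-like local operator
B = A + np.eye(64)                            # ResNet-like: identity shortcut

def condition_number(M):
    s = np.linalg.svd(M, compute_uv=False)
    return s.max() / s.min()

print(condition_number(A))   # typically huge: smallest singular value near 0
print(condition_number(B))   # typically much smaller, hence a tamer loss surface
```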
Knowledge tracing refers to the problem of estimating each student's mastery level of knowledge components/skills from their past responses to questions in educational applications. One direct benefit that knowledge tracing methods provide is the ability to predict each student's performance on future questions. However, a key limitation of most existing knowledge tracing methods is that they treat student responses to questions as binary valued, i.e., either correct or incorrect. Analyzing/predicting response correctness is straightforward, but it discards important information, especially for open-ended questions: the exact student response can provide much more information about their knowledge state than response correctness alone. In this paper, we present the first exploration of open-ended knowledge tracing, i.e., the analysis and prediction of students' open-ended responses to questions in a knowledge tracing setup. We first develop a general framework for open-ended knowledge tracing and then detail its application to the domain of computer science education with programming questions. We define a series of evaluation metrics in this domain and conduct a series of quantitative and qualitative experiments to test the limits of open-ended knowledge tracing methods on a real-world student code dataset.
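At the interface level, the shift from binary to open-ended prediction can be sketched as follows; this skeleton is our hypothetical rendering of the framework, with the generator left abstract.

```python
# Hypothetical sketch of an open-ended knowledge tracing interface: a
# student state is updated from past (question, code response) pairs, and
# the model predicts the student's next full code response rather than a
# correct/incorrect bit. Not the authors' implementation.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OpenEndedKT:
    history: List[Tuple[str, str]] = field(default_factory=list)

    def update(self, question: str, code_response: str) -> None:
        """Fold a new (question, student code) pair into the student state."""
        self.history.append((question, code_response))

    def predict_response(self, question: str) -> str:
        """Generate the student's likely code, conditioned on the state.
        A real system would call a trained code-generation model here."""
        raise NotImplementedError("plug in a trained generator")
```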
We introduce a new neural signal model designed for efficient high-resolution representation of large signals. The key innovation in our multiscale implicit neural representation (MINER) is an internal representation via a Laplacian pyramid, which provides a sparse multiscale decomposition of the signal that captures orthogonal parts of the signal across scales. We leverage the advantages of the Laplacian pyramid by representing small disjoint patches of the pyramid at each scale with small MLPs. This enables the capacity of the network to adaptively increase from coarse to fine scales, representing only the parts of the signal that carry strong energy. The parameters of each MLP are optimized from coarse to fine scale, resulting in faster approximations at coarser scales and, ultimately, an extremely fast training process. We apply MINER to a range of large-scale signal representation tasks, including gigapixel images and very large point clouds, and demonstrate that it requires fewer than 25% of the parameters, 33% of the memory footprint, and 10% of the computation time of competing techniques such as ACORN to reach the same representation accuracy.
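A minimal sketch of the decomposition follows, using our own simplified pyramid construction and a hypothetical energy threshold; the actual per-patch MLP fitting is elided.

```python
# Sketch of MINER's decomposition: build a Laplacian pyramid, then visit
# patches coarse-to-fine, fitting a tiny MLP only where the residual
# energy is non-negligible. Patch size and threshold are illustrative.
import numpy as np
from scipy.ndimage import zoom

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img
    for _ in range(levels - 1):
        down = zoom(cur, 0.5, order=1)
        up = zoom(down, 2.0, order=1)[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)   # fine residual, sparse where image is smooth
        cur = down
    pyr.append(cur)            # coarsest scale holds the low-pass content
    return pyr

def iter_patches(level, size=32):
    h, w = level.shape
    for i in range(0, h, size):
        for j in range(0, w, size):
            yield level[i:i + size, j:j + size]

img = np.random.rand(256, 256)
for level in reversed(laplacian_pyramid(img)):   # coarse to fine
    for patch in iter_patches(level):
        if np.abs(patch).max() > 1e-3:           # prune low-energy patches
            pass  # here MINER fits a small MLP to this patch's residual
```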
A surprising phenomenon in modern machine learning is the ability of a highly overparameterized model to generalize well (small error on the test data) even when it is trained to memorize the training data (zero error on the training data). This has led to an arms race towards increasingly overparameterized models (cf. deep learning). In this paper, we study an underexplored hidden cost of overparameterization: the fact that overparameterized models may be more vulnerable to privacy attacks, in particular the membership inference attack, which predicts whether (potentially sensitive) examples were used to train a model. We significantly extend the relatively few empirical results on this problem by theoretically proving, for an overparameterized linear regression model in the Gaussian data setting, that membership inference vulnerability increases with the number of parameters. Moreover, a range of empirical studies indicates that more complex, nonlinear models exhibit the same behavior. Finally, we extend our analysis to ridge-regularized linear regression and show in the Gaussian data setting that increased regularization also increases membership inference vulnerability in the overparameterized regime.
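The vulnerability mechanism can be reproduced in a toy Gaussian-data experiment; the minimum-norm regression and loss-threshold attack below are our own illustrative setup, not the paper's construction.

```python
# Toy Gaussian-data experiment: fit minimum-norm linear regression with
# more parameters than samples, then mount a simple loss-threshold
# membership inference attack. Vulnerability appears as a train/non-train
# loss gap that the attacker can exploit.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                          # overparameterized: d >> n
w_true = rng.normal(size=d) / np.sqrt(d)

X_train = rng.normal(size=(n, d))
y_train = X_train @ w_true + 0.1 * rng.normal(size=n)
w_hat = np.linalg.pinv(X_train) @ y_train      # minimum-norm interpolator

X_out = rng.normal(size=(n, d))                # fresh non-member points
y_out = X_out @ w_true + 0.1 * rng.normal(size=n)

loss_in = (X_train @ w_hat - y_train) ** 2     # ~0: training data memorized
loss_out = (X_out @ w_hat - y_out) ** 2        # noticeably larger
threshold = np.median(np.concatenate([loss_in, loss_out]))
# Predict "member" iff loss falls below the threshold.
acc = 0.5 * ((loss_in < threshold).mean() + (loss_out >= threshold).mean())
print(acc)   # well above the 0.5 chance level
```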
Multi-head attention is the driving force behind state-of-the-art transformers, which achieve outstanding performance across a variety of natural language processing (NLP) and computer vision tasks. It has been observed that, for many applications, these attention heads learn redundant embeddings, and most of them can be removed without degrading the model's performance. Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads in transformers with a mixture of keys at each head. These mixtures of keys follow a Gaussian mixture model and allow each attention head to focus efficiently on different parts of the input sequence. Compared to its conventional transformer counterpart, Transformer-MGK accelerates training and inference, has fewer parameters, and requires fewer FLOPs to compute, while achieving comparable or better accuracy across tasks. Transformer-MGK can also easily be extended to linear attention. We empirically demonstrate the advantages of Transformer-MGK in a range of practical applications, including language modeling and tasks involving very long sequences. On the WikiText-103 and Long Range Arena benchmarks, Transformer-MGKs with 4 heads achieve comparable or better performance than baseline transformers with 8 heads.
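A hedged sketch of attention scores built from a mixture of Gaussian keys follows; the shared variance and fixed mixture weights are our simplifications of the architecture described above.

```python
# Sketch: attention weights computed as a mixture of Gaussian densities
# centered at multiple keys per position, instead of a single dot-product
# key per head. Shapes and hyperparameters are illustrative.
import torch

def mgk_attention_scores(q, k_mix, pi, sigma=1.0):
    """q: (L, d); k_mix: (M, L, d) -- M keys per position; pi: (M,)
    mixture weights. Returns (L, L) attention weights."""
    # Squared distances between every query and every component key.
    d2 = ((q[None, :, None, :] - k_mix[:, None, :, :]) ** 2).sum(-1)  # (M, L, L)
    dens = torch.exp(-0.5 * d2 / sigma ** 2)          # unnormalized Gaussians
    score = (pi[:, None, None] * dens).sum(0)         # mix over components
    return score / score.sum(-1, keepdim=True)        # normalize over keys

L, d, M = 6, 8, 2
q = torch.randn(L, d)
k_mix = torch.randn(M, L, d)
pi = torch.tensor([0.5, 0.5])
attn = mgk_attention_scores(q, k_mix, pi)   # each row sums to 1
```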
We formulate a metric for evaluating the performance of a generative network given two sets of images. A popular performance metric currently used for this purpose is the Fréchet Inception Distance (FID). FID assumes that images featurized using the penultimate layer of Inception-v3 follow a Gaussian distribution, an assumption that must hold if we wish to use FID as a metric. However, we show that Inception-v3 features of the ImageNet dataset are not Gaussian; in particular, each marginal is not Gaussian. To remedy this problem, we model the featurized images using Gaussian mixture models (GMMs) and compute the 2-Wasserstein distance restricted to GMMs. We define a performance metric on two sets of images, which we call WaM, by using Inception-v3 (or another classifier) to featurize the images, estimating two GMMs, and comparing the GMMs with the restricted 2-Wasserstein distance. We experimentally show the advantages of WaM over FID, including that FID is more sensitive than WaM to imperceptible image perturbations. By modeling the non-Gaussian features obtained from Inception-v3 as GMMs and using a GMM metric, we can more accurately evaluate generative network performance.
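A sketch of a WaM-style computation follows, under simplifying assumptions: equal component counts and uniform mixture weights, so the restricted transport reduces to an assignment problem; `wam_like` and its parameters are illustrative, not the authors' implementation.

```python
# Sketch: fit a GMM to each feature set, then compare the GMMs with a
# 2-Wasserstein distance restricted to GMMs, using pairwise Gaussian W2
# costs and (under uniform weights) a component assignment.
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linear_sum_assignment
from sklearn.mixture import GaussianMixture

def gaussian_w2_sq(m1, C1, m2, C2):
    """Squared 2-Wasserstein distance between two Gaussians."""
    s1 = sqrtm(C1)
    cross = sqrtm(s1 @ C2 @ s1)
    # .real guards against tiny imaginary parts from sqrtm numerics.
    return np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2 * cross.real)

def wam_like(feats_a, feats_b, k=3):
    ga = GaussianMixture(k, covariance_type="full", random_state=0).fit(feats_a)
    gb = GaussianMixture(k, covariance_type="full", random_state=0).fit(feats_b)
    cost = np.array([[gaussian_w2_sq(ga.means_[i], ga.covariances_[i],
                                     gb.means_[j], gb.covariances_[j])
                      for j in range(k)] for i in range(k)])
    rows, cols = linear_sum_assignment(cost)   # uniform-weight transport plan
    return np.sqrt(cost[rows, cols].mean())

a = np.random.randn(500, 8)          # stand-ins for Inception-v3 features
b = np.random.randn(500, 8) + 0.5
print(wam_like(a, b))
```

The general restricted 2-Wasserstein distance solves an optimal transport problem over arbitrary component weights; the assignment above is the special case the simplifying assumptions allow.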